Medical Image Analysis
Elsevier BV
All preprints, ranked by how well they match Medical Image Analysis's content profile, based on 33 papers previously published here. The average preprint has a 0.06% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Rahi, A.
Accurate cardiac MRI segmentation is essential for quantitative analysis of cardiac structure and function in clinical practice. In this study, we propose an ensemble framework combining several improved UNet-based architectures to achieve robust and clinically reliable segmentation performance. The ensemble integrates multiple models, including variants of standard UNet, Residual UNet, and Attention UNet, optimized through extensive hyperparameter tuning and data augmentation on the subject-based CAMUS dataset. Experimental results demonstrate that our approach achieves a Dice similarity coefficient of 0.91, surpassing several state-of-the-art methods reported in recent literature. Moreover, the proposed ensemble exhibits exceptional stability across subjects and maintains high generalization performance, indicating its strong potential for real-world clinical deployment. This work highlights the effectiveness of ensemble deep learning techniques for cardiac image segmentation and represents a promising step towards clinical-grade automated analysis in cardiac imaging.
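The core idea of this abstract, averaging several models' probability maps and scoring the result with Dice, can be sketched as below; the functions and toy masks are illustrative, not from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def ensemble_segment(prob_maps, threshold=0.5):
    """Average per-model foreground probabilities, then threshold.

    prob_maps: list of (H, W) float arrays in [0, 1], one per model.
    """
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob > threshold).astype(np.uint8)

# Toy example: three "models" that mostly agree on a square region.
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1
m1 = truth.copy()
m2 = truth.copy()
m3 = np.zeros((8, 8)); m3[2:6, 2:7] = 1  # one model over-segments
fused = ensemble_segment([m1, m2, m3])
print(round(dice_coefficient(fused, truth), 3))  # → 1.0
```

Majority averaging suppresses the over-segmented column contributed by only one of the three models, which is the stability benefit the abstract attributes to ensembling.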
Rahi, A.
Cardiac MRI segmentation remains a critical yet challenging task in medical image analysis, particularly for accurate delineation of multi-class cardiac structures using standard public datasets like CAMUS. In this work, we introduce CAMUS-HeartNet, a deep meta-ensemble architecture combining multiple U-Net variants with a meta-learner that intelligently fuses their predictions. We rigorously evaluate our method on the CAMUS dataset and achieve a global mean Dice of 0.9298 and overall pixel accuracy of 96.80%, surpassing many existing models applied to this dataset. Class-wise Dice scores (Background: 0.9861, LV: 0.9424, Myocardium: 0.8792, RV: 0.9115) attest to the model's strength even at challenging myocardial boundaries. AUC values exceed 0.99 for all classes, indicating exceptional discrimination capacity. To the best of our knowledge, no prior study on CAMUS has consistently reported such high performance across all cardiac structures simultaneously with a meta-ensemble strategy. This work demonstrates that meta-learner-guided ensembling can push the frontier of automated cardiac tissue segmentation, offering a robust and accurate tool for downstream clinical and research applications.
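A meta-learner over base-model outputs, in its simplest pixel-wise form, is a learned function of the stacked per-model probabilities. The sketch below uses a fixed linear model as a stand-in; the weights are illustrative, not the paper's learned meta-learner.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_fuse(prob_maps, weights, bias=0.0):
    """Pixel-wise meta-learner: a linear model over the stacked
    per-model foreground probabilities (a minimal stand-in for a
    meta-ensemble; in practice the weights would be fitted on a
    held-out set).
    """
    stacked = np.stack(prob_maps, axis=-1)         # (H, W, n_models)
    logit = stacked @ np.asarray(weights) + bias   # (H, W)
    return sigmoid(logit)

# Two base models disagree on one pixel; the meta-learner trusts model 0 more.
p0 = np.array([[0.9, 0.1]])
p1 = np.array([[0.2, 0.1]])
fused = meta_fuse([p0, p1], weights=[4.0, 1.0], bias=-2.5)
print((fused > 0.5).astype(int))  # → [[1 0]]
```

Unlike plain averaging, the meta-learner can weight base models unequally, which is what lets it recover performance on classes where one variant is systematically stronger.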
Boreiko, V.; Ilanchezian, I.; Ayhan, M.; Müller, S.; Koch, L. M.; Faber, H.; Berens, P.; Hein, M.
In medical image classification tasks like the detection of diabetic retinopathy from retinal fundus images, it is highly desirable to get visual explanations for the decisions of black-box deep neural networks (DNNs). However, gradient-based saliency methods often fail to highlight the diseased image regions reliably. On the other hand, adversarially robust models have more interpretable gradients than plain models but typically suffer a significant drop in accuracy, which is unacceptable for clinical practice. Here, we show that one can get the best of both worlds by ensembling a plain and an adversarially robust model: maintaining high accuracy while improving visual explanations. In addition, our ensemble produces meaningful visual counterfactuals that are complementary to existing saliency-based techniques. Code is available at https://github.com/valentyn1boreiko/Fundus_VCEs.
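The ensembling step itself can be sketched as a convex combination of the two models' logits; the weight `w` and the toy logit values are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def ensemble_logits(logits_plain, logits_robust, w=0.5):
    """Convex combination of a plain and an adversarially robust
    model's logits. The idea is that the fused model keeps the plain
    model's accuracy while inheriting the robust model's more
    interpretable gradients; w is a hypothetical tuning knob.
    """
    return w * np.asarray(logits_plain) + (1 - w) * np.asarray(logits_robust)

plain = [3.0, -1.0]   # confident "disease" prediction
robust = [1.0, 0.5]   # less confident, but better-behaved gradients
fused = ensemble_logits(plain, robust, w=0.5)
print(int(np.argmax(fused)))  # → 0
```

Because the combination is linear, the gradient of the fused logits is the same convex combination of the two models' gradients, which is why the robust model's interpretability carries over.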
Xu, H.; Woicik, A.; Asadian, S.; Shen, J.; Zhang, Z.; Nabipoor, A.; Musi, J. P.; Keenan, J.; Khorsandi, M.; Al-Alao, B.; Dimarakis, I.; Chalian, H.; Lin, Y.; Fishbein, D.; Pal, J.; Wang, S.; Lin, S.
Heart failure is a major cause of morbidity and mortality, with the severest forms requiring heart transplantation. Heart size matching between the donor and recipient is a critical step in ensuring a successful transplantation. Currently, a set of equations based on population measures of height, weight, sex and age, viz. predicted heart mass (PHM), are used but can be improved upon by personalized information from recipient and donor chest CT images. Here, we developed GigaHeart, the first heart-specific foundation model, pretrained on 180,897 chest CT volumes from 56,607 patients. The key idea of GigaHeart is to direct the foundation model's attention towards the heart by contrasting the heart region and the entire chest, thereby encouraging the model to capture fine-grained cardiac features. GigaHeart achieves the best performance on 8 cardiac-specific classification tasks and, further, exhibits superior performance on cross-modal tasks by jointly modeling CT images and reports. We similarly developed a thorax-specific foundation model and observed promising performance on 9 thorax-specific tasks, indicating the potential to extend GigaHeart to other organ-specific foundation models. More importantly, GigaHeart addresses the heart sizing problem. It avoids oversizing by correctly segmenting the hearts of donors and recipients. In regressions against actual heart masses, our AI-segmented total cardiac volumes (TCVs) have a 33.3% R2 improvement when compared to PHM. Meanwhile, GigaHeart also solves the undersizing problem by adding a regression layer to the model. Specifically, GigaHeart reduces the mean squared error by 57% against PHM. In total, we show that GigaHeart increases the acceptable range of donor heart sizes and matches more accurately than the widely used PHM equations. In all, GigaHeart is a state-of-the-art, cardiac-specific foundation model with the key innovation of directing the model's attention to the heart.
GigaHeart can be fine-tuned to accomplish a number of tasks accurately, of which AI-assisted heart sizing is a novel example.
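The "contrast the heart region against the whole chest" idea is a region-vs-global contrastive objective. Below is a generic InfoNCE-style sketch of that idea with toy embeddings; it is not GigaHeart's exact loss, and all names and data are illustrative.

```python
import numpy as np

def info_nce(region_emb, global_emb, temperature=0.1):
    """InfoNCE-style loss: heart-region embeddings and whole-chest
    embeddings from the same scan are positives (the diagonal);
    other scans in the batch act as negatives.
    """
    r = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    g = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    sim = r @ g.T / temperature                    # (B, B) similarities
    sim = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives on diagonal

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
loss_aligned = info_nce(emb, emb)                  # perfectly aligned pairs
loss_random = info_nce(emb, rng.normal(size=(4, 8)))
print(loss_aligned < loss_random)
```

Minimizing this loss pulls each scan's heart-region embedding toward its own chest embedding and away from other scans', which is what encourages the encoder to keep fine-grained cardiac features.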
Quintas, I.; Bontempi, D.; Bors, S.; Trofimova, O.; Boettger, L.; Iuliani, I.; Ortin Vela, S.; Bergmann, S.; Presby, D. M.
Cardiorespiratory fitness (CRF) is a strong predictor of cardiovascular events and all-cause mortality, often outperforming traditional risk factors. However, its clinical assessment remains limited due to the need for specialized equipment, personnel, and time demands. Because CRF is closely tied to vascular health, surrogate measures that capture vascular features may provide a practical alternative for its estimation. Retinal Color Fundus Images (CFIs) provide a non-invasive window into systemic vascular health and have already proven useful in predicting cardiovascular risk factors and diseases. However, CFIs have yet to be explored for their potential to predict CRF. In this study, we introduce RetFit, a novel CRF estimator derived from CFIs by leveraging state-of-the-art vision transformers. We evaluated RetFit's clinical relevance by analyzing its associations with cardiovascular risk factors and disease outcomes, and exploring its genetic architecture, benchmarking it against a submaximal-exercise-test CRF (SETCRF) estimate. RetFit was prognostic of both cardiovascular events (hazard ratios as low as 0.668, 95% CI 0.617-0.723, p<0.001) and overall mortality (hazard ratios as low as 0.780, 95% CI 0.754-0.801, p<0.001), and significantly associated with the majority of disease states and risk factors explored, with these effects being consistent across two external and independent cohorts. Although RetFit and SETCRF shared a moderate phenotypic correlation (r=0.45), their significant genetic associations were disjoint. Interpretability analyses suggest a role for retinal vasculature in RetFit's predictions, with attention maps emphasizing vascular regions and segmentation analyses showing arterial bifurcation count as the strongest associated feature (β = 0.287, 95% CI 0.263-0.311, p<0.001).
These findings highlight the potential of retinal imaging as a scalable, cost-effective, and accessible alternative for CRF estimation, supporting its use in large-scale screening and risk stratification in both clinical and public health contexts.
Gervelmeyer, J.; Mueller, S.; Djoumessi, K.; Merle, D.; Clark, S. J.; Koch, L.; Berens, P.
In the elderly, degenerative diseases often develop differently over time for individual patients. For optimal treatment, physicians and patients would like to know how much time is left for them until symptoms reach a certain stage. However, compared to simple disease detection tasks, disease progression modeling has received much less attention. In addition, most existing models are black-box models which provide little insight into the mechanisms driving the prediction. Here, we introduce an interpretable-by-design survival model to predict the progression of age-related macular degeneration (AMD) from fundus images. Our model not only achieves state-of-the-art prediction performance compared to black-box models but also provides a sparse map of local evidence of AMD progression for individual patients. Our evidence map faithfully reflects the decision-making process of the model in contrast to widely used post-hoc saliency methods. Furthermore, we show that the identified regions mostly align with established clinical AMD progression markers. We believe that our method may help to inform treatment decisions and may lead to better insights into imaging biomarkers indicative of disease progression. The project's code is available at github.com/berenslab/interpretable-deep-survival-analysis.
Kasa, L. W.; Schierding, W.; Kwon, E.; Holdsworth, S.; Danesh-Meyer, H. V.
Glaucoma is increasingly recognized as a neurodegenerative condition involving both retinal and central nervous system structures. Here, we present an integrated framework that combines MK-Curve-corrected diffusion kurtosis imaging (DKI), tractometry, and deep autoencoder-based normative modeling to detect localized white matter abnormalities associated with glaucoma. Using UK Biobank diffusion MRI data, we show that the MK-Curve approach corrects anatomically implausible values and improves the reliability of DKI metrics, particularly mean (MK), radial (RK), and axial kurtosis (AK), in regions of complex fiber architecture. Tractometry revealed reduced MK in glaucoma patients along the optic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus, but not in a non-visual control tract, supporting disease specificity. These abnormalities were spatially localized, with significant changes observed at multiple points along the tracts. MK demonstrated greater sensitivity than MD and exhibited altered distributional features, reflecting microstructural heterogeneity not captured by standard metrics. Node-wise MK values in the right optic radiation showed weak but significant correlations with retinal OCT measures (ganglion cell layer and retinal nerve fiber layer thickness), reinforcing the biological relevance of these findings. Deep autoencoder-based modeling further enabled subject-level anomaly detection that aligned spatially with group-level changes and outperformed traditional approaches. Together, our results highlight the potential of advanced diffusion modeling and deep learning for sensitive, individualized detection of glaucomatous neurodegeneration and support their integration into future multimodal imaging pipelines in neuro-ophthalmology.
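The normative-modeling idea, fit a model on healthy controls only and flag high reconstruction error as anomaly, can be sketched with PCA as a linear stand-in for the paper's deep autoencoder; the data and component counts here are toy assumptions.

```python
import numpy as np

def fit_linear_autoencoder(healthy, n_components=2):
    """PCA as a linear stand-in for a deep autoencoder: fit on healthy
    controls only, so reconstruction error becomes a normative
    anomaly score for new subjects."""
    mu = healthy.mean(axis=0)
    _, _, vt = np.linalg.svd(healthy - mu, full_matrices=False)
    return mu, vt[:n_components]

def anomaly_score(x, mu, components):
    z = (x - mu) @ components.T            # encode into the latent space
    recon = z @ components + mu            # decode back
    return np.linalg.norm(x - recon, axis=-1)

rng = np.random.default_rng(1)
latent = rng.normal(size=(50, 2))
healthy = latent @ rng.normal(size=(2, 10))   # controls lie on a 2-D subspace
mu, comps = fit_linear_autoencoder(healthy, 2)
patient = healthy[0] + 5.0                    # off-manifold deviation
print(anomaly_score(patient, mu, comps) > anomaly_score(healthy[0], mu, comps))
```

A deep autoencoder replaces the linear encode/decode with nonlinear networks, but the scoring logic (deviation from the healthy manifold) is the same.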
Inacio, M. H. d. A.; Shah, M.; Jafari, M.; Shehata, N.; Meng, Q.; Bai, W.; Gandy, A.; Glocker, B.; O'Regan, D. P.
The function of the human heart is characterised by complex patterns of motion that change throughout our lifespan due to accumulated damage across biological scales. Understanding the drivers of cardiac ageing is key to developing strategies for attenuating age-related processes. The motion of the surface of the heart can be conceived as a graph of connected points in space moving through time. Here we develop a generalisable framework for modelling three-dimensional motion as a graph and apply it to a task of predicting biological age. Using sequences of segmented cardiac imaging from 5064 participants in UK Biobank we train a graph neural network (GNN) to learn motion traits that predict healthy ageing. The GNN outperformed (mean absolute error, MAE = 4.74 years) a comparator dense neural network and boosting methods (MAE = 4.90 years and 5.08 years, respectively). We produce human-intelligible explanations of the predictions and using the trained model we also assess the effect of hypertension on biological age. This work shows how graph representations of complex motion can efficiently predict biologically meaningful outcomes.
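Treating the heart surface as a graph of connected points makes the GNN's basic building block a message-passing step over the mesh. The sketch below shows one mean-aggregation layer on a toy ring graph; it is a generic GNN layer, not the paper's architecture, and the features are illustrative.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One mean-aggregation message-passing step: each mesh node
    averages its neighbours' (and its own) motion features, then
    applies a shared linear map followed by ReLU."""
    a = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a.sum(axis=1, keepdims=True)
    return np.maximum((a / deg) @ feats @ weight, 0.0)

# 4 mesh points in a ring, each carrying a 3-D toy motion feature.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = np.arange(12, dtype=float).reshape(4, 3)
out = gcn_layer(adj, feats, np.eye(3))
print(out[0])  # → [4. 5. 6.], the mean of node 0 and its two neighbours
```

Stacking such layers and pooling over nodes yields a graph-level embedding from which a scalar (here, biological age) can be regressed.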
Xu, R.; Jiang, S.; Zhai, Y.; Chen, Y.
Background: Segmentation of the left ventricular myocardium, left ventricular cavity, and right ventricular cavity on short-axis cine cardiac magnetic resonance (CMR) images is essential for quantifying cardiac structure and function. However, existing automated segmentation tools are limited by small training datasets, narrow disease coverage, restrictive input format requirements, and the absence of anatomical plausibility constraints, hindering their clinical adoption. Methods: We constructed the largest annotated CMR short-axis segmentation dataset to date, comprising 1,555 subjects from 12 centers with five cardiac disease types and full cardiac cycle annotations totaling 319,175 labeled images. A MedNeXt-L model was trained using a 2D slice-by-slice strategy with full field-of-view input, eliminating dependencies on 3D volumes, temporal sequences, or region-of-interest (ROI) localization. A deterministic three-step post-processing pipeline was designed to enforce anatomical priors: a connected component constraint, a containment relationship constraint, and a gap-filling constraint. The model was validated on an internal test set (310 subjects) and three independent public external datasets (ACDC, M&Ms1, M&Ms2; 855 subjects from 6 additional centers across 3 countries), spanning 15 cardiac disease categories, 10 of which were never encountered during training. Results: The model achieved mean Dice similarity coefficients (DSC) of 0.913 ± 0.037 and 0.911 ± 0.040 on internal and external test sets, respectively, with a cross-domain performance gap of only 0.002. Post-processing eliminated all containment violations (7.5% → 0%) and gap errors (1.8% → 0%) while reducing fragment rates by 85.5% (9.0% → 1.3%). Zero-shot generalization to 10 unseen disease categories yielded DSC values ranging from 0.899 to 0.921. Automated clinical functional parameters demonstrated excellent agreement with manual measurements for left ventricular indices and right ventricular volumes (intraclass correlation coefficients ≥ 0.977). Conclusions: CorSeg-CineSAX provides a robust, open-source framework for fully automatic CMR short-axis segmentation across diverse clinical scenarios. All source code and pre-trained weights are publicly available at https://github.com/RunhaoXu2003/CorSeg.
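The first step of such an anatomical post-processing pipeline, the connected-component constraint, amounts to keeping only the largest connected region of each class mask. A minimal pure-NumPy version (4-connectivity, flood fill) might look like this; the paper's pipeline additionally enforces containment and gap-filling constraints not shown here.

```python
import numpy as np

def largest_component(mask):
    """Keep only the largest 4-connected component of a binary mask,
    discarding spurious fragments."""
    mask = mask.astype(bool)
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        stack = [seed]
        while stack:                         # iterative flood fill
            i, j = stack.pop()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj] and not labels[ni, nj]):
                    labels[ni, nj] = current
                    stack.append((ni, nj))
    if current == 0:
        return mask.astype(np.uint8)
    sizes = np.bincount(labels.ravel())[1:]  # component sizes, label order
    return (labels == (1 + np.argmax(sizes))).astype(np.uint8)

seg = np.zeros((6, 6), dtype=np.uint8)
seg[1:4, 1:4] = 1        # main blob (9 px)
seg[5, 5] = 1            # spurious fragment
cleaned = largest_component(seg)
print(int(cleaned.sum()))  # → 9
```

In practice `scipy.ndimage.label` does the labeling step far faster; the point here is the deterministic, model-free nature of the constraint.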
aggarwal, s.
Deep learning models for the screening of diabetic retinopathy (DR) have achieved near-human performance on benchmark datasets, but their performance deteriorates in real-world settings due to imaging artifacts such as glare, blur, and reflections. Current public datasets such as DDR contain high-quality fundus images, but they lack the variability and imperfections seen in handheld fundus photography. This mismatch results in models that fail in practice, particularly in low-resource environments where handheld cameras are widely deployed. We introduce DDR-Augmented-Artifacts, an artifact-augmented extension of the DDR dataset that simulates realistic reflection artifacts via patch-based Poisson blending. Unlike prior datasets that exclude noisy images, our dataset explicitly models these challenges, allowing researchers to benchmark and train models that are robust to real-world noise. The dataset, augmentation scripts, and a sample demonstration model are publicly available at GitHub: https://github.com/Shubham2376G/DR_Artifacts and Hugging Face: https://huggingface.co/datasets/shubham212/DR_Artifacts
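To illustrate artifact augmentation in its simplest form, the sketch below overlays a soft circular glare spot on a fundus image. Note this is a plain additive overlay, much simpler than the paper's patch-based Poisson blending, and all parameters are illustrative.

```python
import numpy as np

def add_glare(img, center, radius, strength=0.8):
    """Overlay a soft Gaussian glare spot on a grayscale fundus image.

    img: (H, W) float array in [0, 1].
    center, radius, strength: hypothetical artifact parameters.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    glare = strength * np.exp(-d2 / (2.0 * radius ** 2))
    return np.clip(img + glare, 0.0, 1.0)

img = np.full((32, 32), 0.3)
aug = add_glare(img, center=(16, 16), radius=4)
print(aug[16, 16])  # → 1.0 (saturated at the glare center)
```

Poisson blending instead solves for pixel values that match the gradients of a real reflection patch, which is why it produces far more realistic artifacts than this additive approximation.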
Kechagias-Stamatis, O.; Aouf, N.; Koukos, J.
The outbreak of the novel coronavirus (COVID-19) disease has spurred a tremendous research boost aiming at controlling it. Under this scope, deep learning techniques have received even more attention as an asset to automatically detect patients infected by COVID-19 and reduce the doctors' burden of manually assessing medical imagery. Thus, this work considers a deep learning architecture that fuses the layers of current state-of-the-art deep networks to produce a new structure-fused deep network. The advantages of our deep network fusion scheme are manifold, and ultimately afford an appealing COVID-19 automatic diagnosis that outperforms current deep learning methods. Indeed, evaluation on Computed Tomography (CT) and X-ray imagery considering a two-class (COVID-19 / non-COVID-19) and a four-class (COVID-19 / non-COVID-19 / bacterial pneumonia / viral pneumonia) classification problem highlights the classification capabilities of our method, attaining 99.3% and 100%, respectively.
Zhou, M.; Zhang, M.; Wang, J.; Shao, C.; Yan, G.
Cardiovascular disease is one of the leading causes of death worldwide, with myocardial infarction (MI) being a major cause of both morbidity and mortality among cardiovascular patients. MI patients face a higher risk of cardiovascular disease recurrence afterwards. Therefore, accurately predicting the risk of recurrence and identifying key risk factors are crucial for clinical decision-making. In this paper, we consider the interrelationships among cardiovascular factors from a systemic perspective. We first construct a differential network for each patient to capture individual-specific deviations in factor relationships and propose a novel method, termed Causal Factor-aware Graph Neural Network (CFGNN), which integrates factor interactions to predict the recurrence risk of MI patients while uncovering key risk factors from a causal perspective. Experimental results demonstrate that CFGNN performs well on real-world hospital-derived datasets, effectively identifying several key risk factors. This method not only deepens our understanding of cardiovascular disease, but also paves the way for more targeted and effective interventions.
Ahmed, F.; Coskunuzer, B.
The analysis of fundus images for the early screening of eye diseases is of great clinical importance. Traditional methods for such analysis are time-consuming and expensive as they require a trained clinician. Therefore, the need for a comprehensive and automated clinical decision support system to diagnose and grade retinal diseases has long been recognized. In the past decade, with the substantial developments in computer vision and deep learning, machine learning methods have become highly effective in this field to address this need. However, most of these algorithms face challenges like computational feasibility, reliability, and interpretability. In this paper, our contributions are two-fold. First, we introduce a very powerful feature extraction method for fundus images by employing the latest topological data analysis methods. Through our experiments, we observe that our topological feature vectors are highly effective in distinguishing normal and abnormal classes for the most common retinal diseases, i.e., Diabetic Retinopathy (DR), Glaucoma, and Age-related Macular Degeneration (AMD). Furthermore, these topological features are interpretable, computationally feasible, and can be seamlessly integrated into any forthcoming ML model in the domain. Secondly, we move forward in this direction, constructing a topological deep learning model by integrating our topological features with several deep learning models. Empirical analysis shows a notable enhancement in performance aided by the use of topological features. Remarkably, our model surpasses all existing models, demonstrating superior performance across several benchmark datasets pertaining to two of these three retinal diseases.
Ilanchezian, I.; Kobak, D.; Faber, H.; Ziemssen, F.; Berens, P.; Ayhan, M. S.
Deep neural networks (DNNs) are able to predict a person's gender from retinal fundus images with high accuracy, even though this task is usually considered hardly possible by ophthalmologists. Therefore, it has been an open question which features allow reliable discrimination between male and female fundus images. To study this question, we used a particular DNN architecture called BagNet, which extracts local features from small image patches and then averages the class evidence across all patches. The BagNet performed on par with the more sophisticated Inception-v3 model, showing that the gender information can be read out from local features alone. BagNets also naturally provide saliency maps, which we used to highlight the most informative patches in fundus images. We found that most evidence was provided by patches from the optic disc and the macula, with patches from the optic disc providing mostly male and patches from the macula providing mostly female evidence. Although further research is needed to clarify the exact nature of this evidence, our results suggest that there are localized structural differences in fundus images between genders. Overall, we believe that BagNets may provide a compelling alternative to the standard DNN architectures also in other medical image analysis tasks, as they do not require post-hoc explainability methods.
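BagNet's aggregation step, averaging per-patch class evidence into an image-level prediction, is simple enough to sketch directly; the toy logits below are illustrative, not from the paper.

```python
import numpy as np

def bagnet_aggregate(patch_logits):
    """Image-level prediction as the mean of per-patch class logits,
    as in BagNet. The per-patch logits themselves double as a
    built-in saliency map, with no post-hoc method needed.

    patch_logits: (n_patches, n_classes) array.
    """
    return patch_logits.mean(axis=0)

# Toy: 5 patches vote for class 0 ("male"), 3 vote for class 1 ("female").
logits = np.array([[2.0, -1.0]] * 5 + [[-1.0, 2.0]] * 3)
image_logits = bagnet_aggregate(logits)
saliency = logits[:, 0] - logits[:, 1]   # per-patch evidence for class 0
print(int(np.argmax(image_logits)), int((saliency > 0).sum()))  # → 0 5
```

Because the image-level score is a plain average, each patch's contribution is exactly its own logit, which is what makes the evidence map faithful by construction.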
Li, H.; Zhuo, C.; Cui, Z.; Cieslak, M.; Salo, T.; Gur, R. E.; Gur, R. C.; Shinohara, R. T.; Oathes, D. J.; Davatzikos, C.; Satterthwaite, T. D.; Fan, Y.
The brain's multi-scale hierarchical organization supports functional segregation and integration. Characterizing the hierarchy of individualized multi-scale functional networks (FNs) is crucial for understanding these fundamental brain processes. It provides promising opportunities for both basic neuroscience and translational research in neuropsychiatric illness. However, current methods typically compute individualized FNs at a single scale and are not equipped to quantify any possible hierarchical organization. To address this limitation, we present a self-supervised deep learning (DL) framework that simultaneously computes multi-scale FNs and characterizes their across-scale hierarchical structure at the individual level. Our method learns intrinsic representations of fMRI data in a low-dimensional latent space to effectively encode multi-scale FNs and their hierarchical structure by optimizing functional homogeneity of FNs across scales jointly in an end-to-end learning manner. A DL model trained on fMRI scans from the Human Connectome Project successfully identified individualized multi-scale hierarchical FNs for unseen individuals and generalized to two external cohorts. Furthermore, the individualized hierarchical structure of FNs was significantly associated with biological phenotypes, including sex, brain development, and brain health. Our framework provides an effective method to compute multi-scale FNs and to characterize the inter-scale hierarchy of FNs for individuals, facilitating a comprehensive understanding of brain functional organization and its inter-individual variation.
Nezamabadi, K.; Sivalokanathan, S.; Lee, J. W.; Tanriverdi, T.; Chen, M.; Lu, D.-Y.; Abraham, J.; Sardaripour, N.; Li, P.; Mousavi, P.; Abraham, M. R.
Left ventricular (LV) scar is a risk factor for sudden cardiac death and heart failure in hypertrophic cardiomyopathy (HCM). LV scar is frequent in HCM and evolves over time. Hence there is a need for LV scar detection and longitudinal monitoring. The current gold standard for LV scar detection is late gadolinium enhancement (LGE) on magnetic resonance imaging (MRI), which is limited by high cost and susceptibility to artifacts from implanted defibrillators. We introduce XplainScar, the first explainable machine learning method for LV scar detection and localization in HCM, using 12-lead electrocardiogram (ECG) data, which is not influenced by implanted devices. We use 500 patients from the JH-HCM Registry for model development, and 248 patients from the UCSF-HCM-Registry for validation. XplainScar combines unsupervised and self-supervised ECG representation learning, resulting in high precision (90%), sensitivity (95%), specificity (80%) and F1-score (90%) for scar detection in the basal, mid, and apical LV myocardium, with a processing time of <1 minute per 10 patients. Basal LV scar prediction by XplainScar is dominated by QRS features, and mid/apical LV scar by T wave features. XplainScar generalizes well to the held-out test UCSF data, with 88% precision, 90% sensitivity, 78% specificity, and F1-score of 89%. In summary, XplainScar demonstrates good performance for LV scar detection, and provides ECG signatures of basal, mid, and apical LV scar in HCM. XplainScar is publicly available at https://github.com/KasraNezamabadi/XplainScar
Nabil, A. S.; Gholami, S.; Leng, T.; Lim, J. I.; ALAM, M. N.
Federated learning enables collaborative model training across multiple institutions while preserving patient data privacy. This study evaluates five different aggregation strategies (FedAvg, FedAdagrad, FedYogi, FedProx, and FedMRI) for federated learning in the context of multi-disease retinal disease classification using optical coherence tomography angiography (OCTA). We tested these approaches on a diverse dataset combining public OCTA-500 and private data provided by the University of Illinois Chicago (UIC) across seven distinct retinal pathologies, comparing performance against centralized and standalone models in three experimental scenarios of varying class complexity. Our results demonstrate that federated approaches can match or even exceed centralized training performance, with FedMRI achieving 60.87% accuracy in the comprehensive seven-class scenario and all three primary federated methods (FedAvg, FedProx, FedMRI) outperforming centralized training in simplified class scenarios (72.09% vs 69.77%). We observed that different aggregation strategies excel in different performance metrics--FedMRI consistently demonstrated superior ROC-AUC performance while FedAvg showed stronger F1-scores, suggesting better class balance management. These findings provide practical insights for implementing privacy-preserving collaborative AI systems in OCTA-based ophthalmic diagnostics.
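The baseline aggregation strategy compared here, FedAvg, averages client model parameters weighted by each client's data size. A minimal sketch, with toy single-layer "models":

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: the server combines client parameters as a
    weighted average, with weights proportional to each client's
    number of training samples.

    client_weights: list of per-client parameter lists (np.ndarrays).
    """
    total = float(sum(client_sizes))
    agg = []
    for layers in zip(*client_weights):        # iterate layer-wise
        agg.append(sum(w * (n / total)
                       for w, n in zip(layers, client_sizes)))
    return agg

# Two clients with a single 2x2 parameter matrix; client 0 has 3x the data.
c0 = [np.ones((2, 2))]
c1 = [np.zeros((2, 2))]
global_w = fed_avg([c0, c1], client_sizes=[300, 100])
print(global_w[0][0, 0])  # → 0.75
```

The other strategies in the abstract modify this step: FedProx adds a proximal term to the client objective, and the adaptive methods (FedAdagrad, FedYogi) apply server-side adaptive optimizer updates instead of a plain weighted average.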
Upadhyayula, S. K.
Coronary artery disease (CAD), primarily driven by atherosclerosis, poses significant health risks, contributing to a rising mortality rate globally. This study introduces a deep learning framework designed for the automated segmentation of coronary arteries and quantification of coronary artery calcium (CAC) from CT scans, facilitating improved risk stratification in patients. Leveraging data from the National Lung Screening Trial, we developed a three-step model that includes heart localization, coronary calcium segmentation, and calcium scoring. Various configurations of the UNet architecture were employed, with the Extended UNet utilizing an autoencoder achieving the highest validation performance, reflected by an Intersection over Union (IoU) score of 0.78 and an F1 score of 0.83. The model's efficacy was validated against manually segmented masks, showcasing its potential for accurate risk assessment based on CAC scores. This automated approach significantly reduces the time and expertise required for traditional calcium scoring, enabling rapid and reliable assessments in clinical settings. Our findings indicate that the deep learning system can effectively classify patients into risk categories, underscoring its potential utility in enhancing the management of CAD and improving patient outcomes. This research highlights the feasibility of integrating advanced computational techniques into routine clinical practice, paving the way for more efficient cardiovascular risk stratification.
Kritopoulos, G.; Neofotistos, G.; Barmparis, G. D.; Tsironis, G. P.
Class imbalance in clinical electrocardiogram (ECG) datasets limits the diagnostic sensitivity of automated arrhythmia classifiers, particularly for rare but clinically significant beat types. We propose a three-stage hybrid generative pipeline that combines a spectral-guided conditional Variational Autoencoder (cVAE), a class-conditional latent Denoising Diffusion Probabilistic Model (DDPM), and a Quantum Latent Refinement (QLR) module built on parameterized quantum circuits to augment minority arrhythmia classes in the MIT-BIH Arrhythmia Database. The QLR module applies a bounded residual correction guided by Maximum Mean Discrepancy minimization to align synthetic latent distributions with real class-specific latent banks. A lightweight 1D MobileNetV2 classifier evaluated over five independent random seeds and four augmentation ratios serves as the downstream benchmark. Our findings establish latent diffusion augmentation as an effective strategy for imbalanced ECG classification and motivate further investigation of quantum-classical hybrid methods in cardiac diagnostics.
Fadnavis, S.; Polosecki, P.; Garyfallidis, E.; Castro, E.; Cecchi, G.
Detecting neuro-degenerative disorders in early-stage and asymptomatic patients is challenging. Diffusion MRI (dMRI) has shown great success in generating biomarkers for cellular organization at the microscale level using complex biophysical models, but there has never been a consensus on a clinically usable standard model. Here, we propose a new framework (MVD-Fuse) to integrate measures of diverse diffusion models to detect alterations of white matter microstructure. The spatial maps generated by each measure are considered as a different diffusion representation (view), the fusion of these views being used to detect differences between clinically distinct groups. We investigate three different strategies for performing intermediate fusion: neural networks (NN), multiple kernel learning (MKL) and multi-view boosting (MVB). As a proof of concept, we applied MVD-Fuse to a classification of premanifest Huntington's disease (pre-HD) individuals and healthy controls in the TRACK-ON cohort. Our results indicate that MVD-Fuse boosts predictive power, especially with MKL (0.90 AUC vs 0.85 with the best single diffusion measure). Overall, our results suggest that an improved characterization of pathological brain microstructure can be obtained by combining various measures from multiple diffusion models.
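At its core, the MKL fusion strategy combines per-view Gram matrices into a single kernel. A minimal sketch with fixed weights follows; in the paper the weights are learned jointly with the classifier, and the views and data here are toy placeholders.

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Multiple-kernel-learning fusion at its simplest: the combined
    kernel is a convex combination of per-view Gram matrices (e.g.
    one per diffusion measure). Weights are fixed here for
    illustration rather than learned.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # enforce convexity
    return sum(w * k for w, k in zip(weights, kernels))

rng = np.random.default_rng(2)
x_view1 = rng.normal(size=(5, 4))   # subjects x features, diffusion view 1
x_view2 = rng.normal(size=(5, 6))   # same subjects, a second view
k1 = x_view1 @ x_view1.T            # linear kernel per view
k2 = x_view2 @ x_view2.T
k = combine_kernels([k1, k2], weights=[2.0, 1.0])
print(bool(np.allclose(k, k.T)))  # → True (valid symmetric kernel)
```

Because a convex combination of positive semi-definite kernels is itself positive semi-definite, the fused matrix can be dropped directly into any kernel classifier such as an SVM.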